This page last changed on Dec 23, 2005 by sbannasch.
Ann's documentation
You can find Ann's documentation inside the Teemss2Website project. See below for notes on how to check out that project. The documentation is located at:
Teemss2Website/documentation/new/all_instructions.doc
Look in the section called XSLT Processing
Looking at the Pipeline Code
The best way to understand what is going on in the pipeline is to look at the Java code that runs it.
You can find it in Eclipse:
- check out the Teemss2Website project:
The cvsroot is:
cvs.concord.org:/home/buildspace/CodeBank/
The location of the project is:
Projects/Teemss2Website
To check it out in Eclipse, switch to the CVS Repository perspective. Then select
the CVS repository you want to browse, select HEAD, and then select the Projects folder.
Then right-click on it and select Check Out.
- If you are in the Java perspective (the top right of Eclipse tells you which perspective)
then you will see a folder called:
process/WEB_INF/classes
expand that folder
- inside is a file called ProcessSubmit.java; double-click on it.
- that file describes the different processing steps. You can look for "inOutStylesheetWithParametersTranslet" method calls to see where the processing is done.
You shouldn't have to modify this file, but you can use it to figure out what is going on when the different check boxes are selected on the portal publishing page.
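For orientation, a single pipeline step boils down to: run one stylesheet over one input file, write one output file, and optionally pass stylesheet parameters. The helper below is a minimal JAXP sketch of that idea, not the actual inOutStylesheetWithParametersTranslet method; the real code in ProcessSubmit.java uses compiled translets and its signature may differ. The later sketches on this page reuse this hypothetical PipelineStep.transform helper.

import java.io.File;
import java.util.Map;

import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

/**
 * Illustrative stand-in for a single pipeline step: run one stylesheet
 * over one input file, writing one output file, with optional parameters.
 */
public class PipelineStep {

    public static void transform(String stylesheet, String in, String out,
                                 Map<String, String> params) throws Exception {
        TransformerFactory factory = TransformerFactory.newInstance();
        Transformer t = factory.newTransformer(new StreamSource(new File(stylesheet)));
        if (params != null) {
            for (Map.Entry<String, String> e : params.entrySet()) {
                t.setParameter(e.getKey(), e.getValue());
            }
        }
        t.transform(new StreamSource(new File(in)), new StreamResult(new File(out)));
    }
}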
On maggie you can find the stylesheets and intermediate xml files in:
/home/teemss2/tomcat/webapps/process
the tomcat directory tree is actually symlinked to here:
maggie:/home/teemss2/tomcat -> /var/tomcat/teemss2_tomcat
When you make a change to the XSLT and put it on the server, it is useful to watch the log to see if there is an error in your work. The log is located at /home/teemss1/tomcat/logs/catalina.out, so I usually do:
tail -f tomcat/logs/catalina.out
Common steps in all publishing
All of this occurs in scdir: /var/tomcat/teemss2_tomcat/webapps/process/
use XMLDBMS to go from the schema created by Apache Torque to an XML dump of the data in the database tables
Transfer1.process(true, "tempy3.map", "xmldbms_out.xml", "cc_top", tkey, "x_out2.xml")
...do the replaceAll function (which was in Transfer1) as an XSLT step. This is Scott's XSLT - it saves over 2 minutes of processing
replaceAll.xslt, xmldbms_out.xml, replaceAll.xml
...exclude units whose status is 0 in the unit table
exclude_unit.xslt, replaceAll.xml, excludedData.xml
...get the static data collector file used by the data collectors to graph images
static_data_collectors.xslt, excludedData.xml, makeGraphImages.xml
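Expressed with the hypothetical PipelineStep helper above, these common steps chain together like this: each step's output file is the next step's input. The Transfer1.process call is the real one quoted above and is left as a comment.

public class CommonSteps {
    public static void run(String tkey) throws Exception {
        // XMLDBMS dump of the database tables (the real call from ProcessSubmit.java):
        // Transfer1.process(true, "tempy3.map", "xmldbms_out.xml", "cc_top", tkey, "x_out2.xml");

        // Scott's replaceAll step, done as XSLT instead of inside Transfer1
        PipelineStep.transform("replaceAll.xslt", "xmldbms_out.xml", "replaceAll.xml", null);

        // Drop units whose status is 0 in the unit table
        PipelineStep.transform("exclude_unit.xslt", "replaceAll.xml", "excludedData.xml", null);

        // Static data collector file used to graph images
        PipelineStep.transform("static_data_collectors.xslt", "excludedData.xml", "makeGraphImages.xml", null);
    }
}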
The web and otml (portfolio) processes output to dataWithNav.xml, while the LabBook process outputs to dataWithNavLab.xml.
...add nav tags and rewrite data collector info under data collector response
web and otml
add_nav.xslt, excludedData.xml, dataWithNav.xml
expand_db => true
...add nav tags and rewrite data collector info under data collector response
labbook
add_nav.xslt, excludedData.xml, dataWithNavLab.xml
expand_db => false
... and then add table count column tags and process responseTables info under the data collector response
prep_labbook.xslt, dataWithNavLab.xml, prepared_labbook.xml
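The only difference between the two add_nav runs is the expand_db parameter and the output file name. In terms of the hypothetical helper above (Map.of needs Java 9+):

import java.util.Map;

public class NavSteps {
    public static void run() throws Exception {
        // Web and otml (portfolio): expand_db => true
        PipelineStep.transform("add_nav.xslt", "excludedData.xml", "dataWithNav.xml",
                Map.of("expand_db", "true"));

        // LabBook: expand_db => false, followed by the table/count/column prep step
        PipelineStep.transform("add_nav.xslt", "excludedData.xml", "dataWithNavLab.xml",
                Map.of("expand_db", "false"));
        PipelineStep.transform("prep_labbook.xslt", "dataWithNavLab.xml", "prepared_labbook.xml", null);
    }
}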
The public web processing
process all seven companies using public_web.xslt, for both the public and teacher web pages ... this is really a more complicated nested loop in the code (see the sketch after the parameter listing below)
for each company into teemss2
public_web.xslt, dataWithNav.xml, public_web_out.xml:
app_use => public, sectionDir => teemss2, sectionUrl => teemss2,
portalUrl => /teemss2-website2/populatePortal.do, displayTeacherNotes => false,
displayTeacherAnswers => false, topDir => teemss2, companyDir => company,
pubDir => ???, displayAuthLinks => false
end
for each company into teemss2-teacher
public_web.xslt, dataWithNav.xml, public_web_out.xml:
app_use => public, sectionDir => teemss2-teacher, sectionUrl => teemss2-teacher,
portalUrl => /teemss2-website2/populatePortal.do, displayTeacherNotes => true,
displayTeacherAnswers => false, topDir => teemss2-teacher, companyDir => company,
pubDir => ???, displayAuthLinks => false
end
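A sketch of that nested loop with the hypothetical helper. The company names are placeholders (the page only says there are seven; the real list lives in the database and ProcessSubmit.java), and pubDir is left out because it is elided ("???") in the notes above.

import java.util.HashMap;
import java.util.Map;

public class PublicWebSteps {
    // Placeholder names; the real seven companies come from the database.
    private static final String[] COMPANIES = { "company1", "company2" /* ... */ };

    public static void run() throws Exception {
        // Two passes: public pages (no teacher notes) and teacher pages (with teacher notes).
        for (String section : new String[] { "teemss2", "teemss2-teacher" }) {
            boolean teacher = section.equals("teemss2-teacher");
            for (String company : COMPANIES) {
                Map<String, String> params = new HashMap<>();
                params.put("app_use", "public");
                params.put("sectionDir", section);
                params.put("sectionUrl", section);
                params.put("portalUrl", "/teemss2-website2/populatePortal.do");
                params.put("displayTeacherNotes", Boolean.toString(teacher));
                params.put("displayTeacherAnswers", "false");
                params.put("topDir", section);
                params.put("companyDir", company);
                params.put("displayAuthLinks", "false");
                PipelineStep.transform("public_web.xslt", "dataWithNav.xml", "public_web_out.xml", params);
            }
        }
    }
}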
The XSLT script public_web.xslt imports the following XSLT files:
- common_all.xslt
- common_web.xslt
- public_all.xslt
- templates_web.xslt
The public otml processing
... process using public_ot.xslt
public_ot.xslt, dataWithNav.xml, public_ot.xml
section => project, sectionDir => public,
sectionUrl => diskdir+"/webapps/teemss2-website2/otrunk",
displayTeacherNotes => false, displayTeacherAnswers => false,
displayAuthLinks => false
The XSLT script public_ot.xslt imports the following XSLT files:
- common_all.xslt
- common_ot.xslt
- public_all.xslt
- templates_ot.xslt
... and then process public_ot.xml for each vendor (basically creating one or more OTDeviceConfigs and a vendor_id) and package the result into a jar (see the sketch after the vendor list below)
vernier_ot.xslt, public_ot.xml, public-vernier.otml
processJars: public-vernier.otml => public-vernier.jar
fourier_ot.xslt, public_ot.xml, public-fourier.otml
processJars: public-fourier.otml => public-fourier.jar
pasco_ot.xslt, public_ot.xml, public-pasco.otml
processJars: public-pasco.otml => public-pasco.jar
data_harvest_ot.xslt, public_ot.xml, public-data_harvest.otml
processJars: public-data_harvest.otml => public-data_harvest.jar
texas_instruments_ot.xslt, public_ot.xml, public-texas_instruments.otml
processJars: public-texas_instruments.otml => public-texas_instruments.jar
none_ot.xslt, public_ot.xml, public-none.otml
processJars: public-none.otml => public-none.jar
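The vendor pass is the same transform applied once per vendor stylesheet, followed by the processJars step. The jar packaging below is only a guess at what processJars does (wrap the otml file in a jar); the real step may add a manifest, extra resources, or signing.

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.util.jar.JarEntry;
import java.util.jar.JarOutputStream;

public class VendorOtmlSteps {
    private static final String[] VENDORS = {
        "vernier", "fourier", "pasco", "data_harvest", "texas_instruments", "none"
    };

    public static void run() throws Exception {
        for (String vendor : VENDORS) {
            String otml = "public-" + vendor + ".otml";
            // <vendor>_ot.xslt adds the OTDeviceConfig(s) and vendor_id for this vendor
            PipelineStep.transform(vendor + "_ot.xslt", "public_ot.xml", otml, null);
            packageJar(otml, "public-" + vendor + ".jar");
        }
    }

    // Guess at the processJars step: copy the otml file into a jar of the same name.
    private static void packageJar(String otml, String jarName) throws Exception {
        try (JarOutputStream jar = new JarOutputStream(new FileOutputStream(jarName));
             FileInputStream in = new FileInputStream(otml)) {
            jar.putNextEntry(new JarEntry(otml));
            in.transferTo(jar);
            jar.closeEntry();
        }
    }
}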
The public LabBook processing
... first convert prepared_labbook.xml into vendor-specific labbooks
teemss2_labbook_hr.xslt, prepared_labbook.xml, labbooks/labbook-imagi_lc.xml
sensorInterfaceId => 5
imageDir => /web/teemss2.concord.org/palm/processed_images/Units
teemss2_labbook_hr.xslt, prepared_labbook.xml, labbooks/labbook-airlink.xml
sensorInterfaceId => 22
imageDir => /web/teemss2.concord.org/palm/processed_images/Units
... Now use the xml2labbook jar to create PDB files
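A sketch of the labbook conversions with the same hypothetical helper; the sensorInterfaceId and imageDir values are the ones listed above. The xml2labbook step itself is not sketched because its command-line arguments are not documented on this page.

import java.util.Map;

public class LabBookSteps {
    private static final String IMAGE_DIR =
        "/web/teemss2.concord.org/palm/processed_images/Units";

    public static void run() throws Exception {
        // Vendor-specific labbook XML; sensorInterfaceId values from the notes above.
        PipelineStep.transform("teemss2_labbook_hr.xslt", "prepared_labbook.xml",
                "labbooks/labbook-imagi_lc.xml",
                Map.of("sensorInterfaceId", "5", "imageDir", IMAGE_DIR));
        PipelineStep.transform("teemss2_labbook_hr.xslt", "prepared_labbook.xml",
                "labbooks/labbook-airlink.xml",
                Map.of("sensorInterfaceId", "22", "imageDir", IMAGE_DIR));

        // Final step (not shown): the xml2labbook jar turns each labbook XML file
        // into a Palm PDB file.
    }
}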